14 research outputs found

    Variational and learning models for image and time series inverse problems

    Get PDF
    Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged into unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes outperform many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme that uses a deep-learning-based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of both supervised strategies, such as trained convolutional and recurrent networks, and unsupervised strategies, such as Deep Image Prior, by penalizing their losses with handcrafted regularization terms.
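    To fix ideas, the sketch below illustrates the variational side of this framework on a toy 1-D deconvolution problem: a reconstruction map defined as the minimizer of a quadratic data-fidelity term plus a regularizer on the signal gradient. The forward operator, the quadratic gradient penalty, and the plain gradient-descent solver are illustrative choices, not the specific models or algorithms developed in the thesis.

```python
import numpy as np

# Toy variational reconstruction: x_hat = argmin_x 0.5*||A x - b||^2 + lam*||D x||^2
# A: illustrative blur operator, D: first-order finite differences on a 1-D signal.

def finite_diff(n):
    """First-order difference matrix D of shape (n-1, n)."""
    D = np.zeros((n - 1, n))
    D[np.arange(n - 1), np.arange(n - 1)] = -1.0
    D[np.arange(n - 1), np.arange(1, n)] = 1.0
    return D

def reconstruct(A, b, lam=0.5, steps=500, lr=1e-2):
    """Plain gradient descent on the regularized least-squares objective."""
    n = A.shape[1]
    D = finite_diff(n)
    x = np.zeros(n)
    for _ in range(steps):
        grad = A.T @ (A @ x - b) + 2.0 * lam * (D.T @ (D @ x))
        x -= lr * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100
    x_true = np.repeat([0.0, 1.0, 0.3, 0.8], 25)                   # piecewise-constant signal
    A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)   # toy blur operator
    b = A @ x_true + 0.05 * rng.standard_normal(n)                 # noisy measurements
    x_hat = reconstruct(A, b)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```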

    Recurrent Neural Networks Applied to GNSS Time Series for Denoising and Prediction

    Get PDF
    Global Navigation Satellite Systems (GNSS) continuously acquire data and provide position time series. Many monitoring applications are based on GNSS data, and their effectiveness depends on the ability of the time series analysis to characterize the signal content and/or to predict incoming coordinates. In this work we propose a suitable network architecture, based on Long Short-Term Memory recurrent neural networks, to solve two main tasks in GNSS time series analysis: denoising and prediction. We carry out an analysis on a synthetic time series, then inspect two different real case studies and evaluate the results. We develop a non-deep network that removes almost 50% of the scattering from real GNSS time series and achieves coordinate prediction with a Mean Squared Error of 1.1 millimeters.
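    As a rough illustration of the prediction task, the following sketch (assuming PyTorch) trains a small LSTM to map a window of past coordinates to the next sample of a synthetic series. The layer sizes, window length, and training setup are illustrative and are not the architecture or data from the paper.

```python
import math
import torch
import torch.nn as nn

class CoordLSTM(nn.Module):
    """Small LSTM mapping a window of past coordinates to the next coordinate.
    Hypothetical sizes, not the network described in the paper."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next sample

def make_windows(series, window=30):
    """Slice a 1-D coordinate series into (input window, next value) pairs."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    x = torch.tensor(xs, dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(ys, dtype=torch.float32).unsqueeze(-1)
    return x, y

if __name__ == "__main__":
    # Synthetic "coordinate" series: slow drift plus high-frequency scatter.
    series = [math.sin(0.05 * t) + 0.05 * math.cos(3 * t) for t in range(500)]
    x, y = make_windows(series)
    model = CoordLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(50):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print("train MSE:", loss.item())
```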

    Combining Weighted Total Variation and Deep Image Prior for natural and medical image restoration via ADMM

    Full text link
    In the last decades, unsupervised deep-learning-based methods have caught researchers' attention, since in many real applications, such as medical imaging, collecting a large amount of training examples is not always feasible. Moreover, constructing a good training set is time-consuming and hard, because the selected data have to be sufficiently representative of the task. In this paper, we focus on the Deep Image Prior (DIP) framework and propose to combine it with a space-variant Total Variation regularizer with automatic estimation of the local regularization parameters. Differently from other existing approaches, we solve the resulting minimization problem via the flexible Alternating Direction Method of Multipliers (ADMM). Furthermore, we also provide a specific implementation for the standard isotropic Total Variation. The promising performance of the proposed approach, in terms of PSNR and SSIM values, is assessed through several experiments on simulated as well as real corrupted natural and medical images.
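    The sketch below shows, under simplifying assumptions, the ADMM splitting for the Total Variation part of such a model: an anisotropic weighted-TV denoising problem whose substeps reduce to a linear solve (here via conjugate gradient) and a soft-thresholding of the image gradient. The DIP network and the paper's automatic estimation of the local regularization parameters are omitted; the weight map and parameters are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Anisotropic weighted-TV denoising solved with ADMM:
#   min_x 0.5*||x - y||^2 + sum_i lam_i * |(D x)_i|
# The DIP network and the automatic estimation of the local weights lam_i
# are omitted; lam is passed in as a fixed (possibly space-variant) map.

def grad(x):
    """Forward differences along both axes, zero at the last row/column."""
    gx = np.zeros_like(x); gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy = np.zeros_like(x); gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def div(gx, gy):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(gx); dx[:, 0] = gx[:, 0]
    dx[:, 1:-1] = gx[:, 1:-1] - gx[:, :-2]; dx[:, -1] = -gx[:, -2]
    dy = np.zeros_like(gy); dy[0, :] = gy[0, :]
    dy[1:-1, :] = gy[1:-1, :] - gy[:-2, :]; dy[-1, :] = -gy[-2, :]
    return dx + dy

def wtv_admm(y, lam, rho=1.0, iters=100):
    shape = y.shape
    x = y.copy()
    tx = np.zeros(shape); ty = np.zeros(shape)   # split variable t = D x
    ux = np.zeros(shape); uy = np.zeros(shape)   # scaled dual variable

    def A_mv(v):
        v = v.reshape(shape)
        gx, gy = grad(v)
        return (v - rho * div(gx, gy)).ravel()   # (I + rho * D^T D) v

    A = LinearOperator((y.size, y.size), matvec=A_mv)
    for _ in range(iters):
        # x-substep: linear system solved by conjugate gradient
        rhs = (y - rho * div(tx - ux, ty - uy)).ravel()
        x, _ = cg(A, rhs, x0=x.ravel(), maxiter=20)
        x = x.reshape(shape)
        gx, gy = grad(x)
        # t-substep: soft-thresholding with spatially varying threshold lam/rho
        tx = np.sign(gx + ux) * np.maximum(np.abs(gx + ux) - lam / rho, 0.0)
        ty = np.sign(gy + uy) * np.maximum(np.abs(gy + uy) - lam / rho, 0.0)
        # dual update
        ux += gx - tx
        uy += gy - ty
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    lam = 0.3 * np.ones_like(noisy)              # space-variant map, constant here
    denoised = wtv_admm(noisy, lam)
    print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
    print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```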

    DeepCEL0 for 2D Single Molecule Localization in Fluorescence Microscopy

    Get PDF
    In fluorescence microscopy, Single Molecule Localization Microscopy (SMLM) techniques aim at localizing high-density fluorescent molecules with high precision by stochastically activating and imaging small subsets of blinking emitters. Super Resolution (SR) plays an important role in this field, since it makes it possible to go beyond the intrinsic light diffraction limit. In this work, we propose a deep-learning-based algorithm for precise molecule localization in high-density frames acquired by SMLM techniques, whose ℓ2-based loss function is regularized by positivity and ℓ0-based constraints. The ℓ0 term is relaxed through its Continuous Exact ℓ0 (CEL0) counterpart. The resulting approach, named DeepCEL0, is parameter-free, more flexible, faster, and provides more precise molecule localization maps compared to other state-of-the-art methods. We validate our approach on both simulated and real fluorescence microscopy data.
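    For reference, below is a minimal sketch of the CEL0 relaxation in its standard form, evaluated on a toy sparse vector. The forward operator, its column norms, and the regularization parameter are illustrative, and the function is not the DeepCEL0 network or its training loss.

```python
import numpy as np

def cel0_penalty(x, a_norms, lam):
    """Continuous Exact l0 (CEL0) relaxation of lam * ||x||_0 (standard form).

    phi_i(x_i) = lam - 0.5 * a_i^2 * (|x_i| - sqrt(2*lam)/a_i)^2   if |x_i| <= sqrt(2*lam)/a_i
               = lam                                               otherwise,
    where a_i is the norm of the i-th column of the forward operator.
    """
    thresh = np.sqrt(2.0 * lam) / a_norms
    inside = np.abs(x) <= thresh
    phi = np.full_like(x, lam, dtype=float)
    phi[inside] = lam - 0.5 * (a_norms[inside] ** 2) * (np.abs(x[inside]) - thresh[inside]) ** 2
    return phi.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))           # toy forward operator (e.g. a PSF matrix)
    a_norms = np.linalg.norm(A, axis=0)
    x = np.zeros(256)
    x[[10, 50, 200]] = rng.standard_normal(3)    # sparse "molecule map"
    print("CEL0 value  :", cel0_penalty(x, a_norms, lam=0.5))
    print("lam*||x||_0 :", 0.5 * np.count_nonzero(x))
```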

    Efficient ℓ0 gradient-based Super Resolution for simplified image segmentation

    No full text
    We consider a variational model for single-image super-resolution based on the assumption that the gradient of the target image is sparse. We enforce this assumption by considering both an isotropic and an anisotropic ℓ0 regularisation on the image gradient combined with a quadratic data fidelity, similarly to what is studied in [Storath14] for signal recovery problems. For the numerical realisation of the model, we propose a novel, efficient ADMM splitting algorithm whose substep solutions are computed efficiently by means of hard-thresholding and standard conjugate-gradient solvers. We test our model on highly degraded synthetic and real-world data and quantitatively compare our results with several sparsity-promoting variational approaches as well as with state-of-the-art deep-learning techniques. Our experiments show that, thanks to the ℓ0 smoothing on the gradient, the super-resolved images can be used to improve the accuracy of standard segmentation algorithms for applications such as QR code and cell detection and land-cover classification problems.
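    As an illustration of the hard-thresholding substep mentioned above, the sketch below applies the proximal operator of a scaled ℓ0 penalty to a gradient field, in both its anisotropic (componentwise) and isotropic (pixelwise magnitude) variants; parameter names and values are illustrative.

```python
import numpy as np

def hard_threshold_aniso(gx, gy, lam, rho):
    """Prox of (lam/rho)*||.||_0 applied componentwise to the gradient field:
    keep a component if its squared value exceeds 2*lam/rho, else zero it."""
    t2 = 2.0 * lam / rho
    return np.where(gx ** 2 > t2, gx, 0.0), np.where(gy ** 2 > t2, gy, 0.0)

def hard_threshold_iso(gx, gy, lam, rho):
    """Isotropic variant: a pixel's gradient pair (gx, gy) is kept only if
    its squared magnitude exceeds 2*lam/rho, otherwise both are zeroed."""
    keep = gx ** 2 + gy ** 2 > 2.0 * lam / rho
    return np.where(keep, gx, 0.0), np.where(keep, gy, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gx = 0.05 * rng.standard_normal((64, 64))
    gx[:, 32] += 1.0                               # one strong vertical edge
    gy = 0.05 * rng.standard_normal((64, 64))
    sx, sy = hard_threshold_iso(gx, gy, lam=0.1, rho=1.0)
    print("nonzero gradient pixels kept:", int((sx ** 2 + sy ** 2 > 0).sum()))
```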

    On the inverse Potts functional for single-image super-resolution problems

    No full text
    We consider a variational model for single-image super-resolution based on the assumption that the gradient of the target image is sparse. To promote jump sparsity, we use isotropic and anisotropic ℓ0 inverse Potts gradient regularisation terms combined with a quadratic data fidelity, similarly to what is studied in [1] for general problems in signal recovery. For the numerical realisation of the model, we consider a convergent ADMM algorithm. Differently from [1], [2], where approximate graph cuts and dynamic programming techniques were used for solving the non-convex substeps in the case of multivariate data, the proposed splitting allows their solutions to be computed explicitly by means of hard-thresholding and standard conjugate-gradient solvers. We quantitatively compare our results with several convex, nonconvex and deep-learning-based approaches on several synthetic and real-world datasets. Our numerical results show that combining super-resolution with gradient sparsity is particularly helpful for object detection and labelling tasks (such as QR scanning and land-cover classification), for which our results are shown to improve the classification precision of standard clustering algorithms and state-of-the-art deep architectures [3].
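    For reference, the kind of functional and splitting described above can be written as follows (a sketch; the notation and the exact operators may differ from the paper):

```latex
% Inverse Potts model for single-image super-resolution (sketch; notation is
% illustrative): S is the decimation operator, H the blur, y the low-resolution
% data, D the discrete gradient.
\begin{equation*}
  \min_{x}\; \frac{1}{2}\,\| S H x - y \|_2^2 \;+\; \lambda \,\| D x \|_0 .
\end{equation*}
% ADMM splitting with auxiliary variable t = Dx: the t-substep reduces to a
% hard-thresholding, the x-substep to a linear system solved by conjugate gradient.
\begin{equation*}
  \min_{x,\,t}\; \frac{1}{2}\,\| S H x - y \|_2^2 \;+\; \lambda \,\| t \|_0
  \quad \text{s.t.} \quad t = D x .
\end{equation*}
```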
